3,150 research outputs found

    Multiple Texture Boltzmann Machines

    We assess the generative power of the mPoT model of [10] with tiled-convolutional weight sharing as a model for visual textures by specifically training on this task, evaluating model performance on texture synthesis and inpainting tasks using quantitative metrics. We also analyze the relative importance of the mean and covariance parts of the mPoT model by comparing its performance to that of its subcomponents, tiled-convolutional versions of the PoT/FoE and the Gaussian-Bernoulli restricted Boltzmann machine (GB-RBM). Our results suggest that while state-of-the-art or better performance can be achieved using the mPoT, similar performance can be achieved with the mean-only model. We then develop a model for multiple textures based on the GB-RBM, using a shared set of weights but texture-specific hidden unit biases. We show that this multiple-texture model performs comparably to individually trained single-texture models.
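    A minimal sketch of the multiple-texture idea described above, assuming a standard Gaussian-Bernoulli RBM with unit-variance visibles; all names and sizes (W, b, c, n_vis, ...) are illustrative, not taken from the paper:

```python
import numpy as np

# Sketch only: a Gaussian-Bernoulli RBM in which the weight matrix W is shared
# across textures while each texture t gets its own hidden-bias vector c[t].

rng = np.random.default_rng(0)
n_vis, n_hid, n_textures = 64, 128, 4            # assumed toy sizes

W = 0.01 * rng.standard_normal((n_vis, n_hid))   # shared visible-hidden weights
b = np.zeros(n_vis)                              # shared visible biases
c = np.zeros((n_textures, n_hid))                # texture-specific hidden biases

def sample_hiddens(v, t):
    """p(h=1 | v, texture t) for unit-variance Gaussian visibles."""
    p = 1.0 / (1.0 + np.exp(-(v @ W + c[t])))
    return (rng.random(n_hid) < p).astype(float), p

def sample_visibles(h):
    """Gaussian visibles with unit variance: mean = W h + b."""
    return W @ h + b + rng.standard_normal(n_vis)

# One block-Gibbs step conditioned on texture 2
v = rng.standard_normal(n_vis)
h, _ = sample_hiddens(v, t=2)
v_new = sample_visibles(h)
```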

    A Generative Model for Parts-based Object Segmentation

    The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a state-of-the-art model of foreground/background object shape. We extend the SBM to account for the foreground object’s parts. Our new model, the Multinomial SBM (MSBM), can capture both local and global statistics of part shapes accurately. We combine the MSBM with an appearance model to form a fully generative model of images of objects. Parts-based object segmentations are obtained simply by performing probabilistic inference in the model. We apply the model to two challenging datasets which exhibit significant shape and appearance variability, and find that it obtains results that are comparable to the state-of-the-art. There has been significant focus in computer vision on object recognition and detection, e.g. [2], but a strong desire remains to obtain richer descriptions of objects than just their bounding boxes. One such description is a parts-based object segmentation, in which an image is partitioned into multiple sets of pixels, each belonging to either a part of the object of interest or to its background. The significance of parts in computer vision has been recognized since the earliest days of the field.
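    A toy illustration of the multinomial-visible idea (not the authors' code): each pixel carries a categorical label over part classes plus background, sampled from a softmax whose logits depend on the hidden layer; all sizes and names below are assumed.

```python
import numpy as np

# Illustrative only: the key change from binary to multinomial visibles is that
# each pixel's label is drawn from a softmax over K part classes + background,
# with logits determined by the hidden layer.

rng = np.random.default_rng(1)
n_pix, n_hid, n_labels = 32 * 32, 500, 5          # toy sizes: 4 parts + background

W = 0.01 * rng.standard_normal((n_pix, n_labels, n_hid))  # label-specific weights
b = np.zeros((n_pix, n_labels))                            # label-specific biases

def sample_labels(h):
    """Sample one part label per pixel from softmax(W[i] h + b[i])."""
    logits = W @ h + b                            # shape (n_pix, n_labels)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return np.array([rng.choice(n_labels, p=pi) for pi in p])

h = (rng.random(n_hid) < 0.5).astype(float)       # a random hidden configuration
labels = sample_labels(h)                         # per-pixel part assignment
```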

    Regression with Gaussian processes

    The Bayesian analysis of neural networks is difficult because the prior over functions has a complex form, leading to implementations that either make approximations or use Monte Carlo integration techniques. In this paper I investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis to be carried out exactly using matrix operations. The method has been tested on two challenging problems and has produced excellent results.
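    For reference, the exact matrix-based predictions the abstract refers to take the familiar form sketched below (standard GP-regression equations; the kernel, noise level, and toy data are illustrative choices, not the paper's):

```python
import numpy as np

# Exact GP-regression predictions for a zero-mean GP with an RBF kernel and
# Gaussian observation noise, via a Cholesky factorization of the kernel matrix.

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, X_star, noise=0.1):
    """Predictive mean and covariance at test inputs X_star."""
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    L = np.linalg.cholesky(K)                         # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha                              # K_*^T K^{-1} y
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                              # K_** - K_*^T K^{-1} K_*
    return mean, cov

X = np.linspace(-3, 3, 20)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).standard_normal(20)
mean, cov = gp_predict(X, y, np.linspace(-3, 3, 50)[:, None])
```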

    On Suspicious Coincidences and Pointwise Mutual Information


    The Effect of Class Imbalance on Precision-Recall Curves

    In this note I study how the precision of a classifier depends on the ratio $r$ of positive to negative cases in the test set, as well as the classifier's true and false positive rates. This relationship allows prediction of how the precision-recall curve will change with $r$, which seems not to be well known. It also allows prediction of how $F_{\beta}$ and the Precision Gain and Recall Gain measures of Flach and Kull (2015) vary with $r$. Comment: 4 pages, 1 figure.
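    For reference, the standard identity behind this relationship (stated here from first principles, not quoted from the note): if the test set contains $rN$ positives and $N$ negatives, then $\mathrm{TP} = rN \cdot \mathrm{TPR}$ and $\mathrm{FP} = N \cdot \mathrm{FPR}$, so $\mathrm{precision} = \frac{r\,\mathrm{TPR}}{r\,\mathrm{TPR} + \mathrm{FPR}}$, while $\mathrm{recall} = \mathrm{TPR}$ is unaffected by $r$; sweeping $r$ in this expression predicts how the precision-recall curve shifts under class imbalance.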

    Combining spatially distributed predictions from neural networks

    In this report we discuss the problem of combining spatially-distributed predictions from neural networks. An example of this problem is the prediction of a wind vector-field from remote-sensing data by combining bottom-up predictions (wind vector predictions on a pixel-by-pixel basis) with prior knowledge about wind-field configurations. This task can be achieved using the scaled-likelihood method, which has been used by Morgan and Bourlard (1995) and Smyth (1994) in the context of Hidden Markov modelling.
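    A brief sketch of the scaled-likelihood trick in its standard form (the report's exact formulation may differ; the function name and toy numbers below are assumptions):

```python
import numpy as np

# Standard scaled-likelihood conversion: a network trained to output posteriors
# p(c | x_i) is divided by the class priors p(c), giving quantities proportional
# to the likelihoods p(x_i | c).  These can then be recombined with a separate
# prior over whole configurations (e.g. plausible wind fields).

def scaled_likelihoods(posteriors, class_priors):
    """posteriors: (n_sites, n_classes) network outputs p(c | x_i)."""
    return posteriors / class_priors[None, :]     # proportional to p(x_i | c)

# Toy example with two sites and two classes (illustrative numbers only)
posteriors = np.array([[0.7, 0.3],
                       [0.4, 0.6]])
priors = np.array([0.5, 0.5])
print(scaled_likelihoods(posteriors, priors))
```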

    The Elliptical Quartic Exponential Distribution: An Annular Distribution Obtained via Maximum Entropy

    This paper describes the Elliptical Quartic Exponential distribution in $\mathbb{R}^D$, obtained via a maximum entropy construction by imposing second and fourth moment constraints. I discuss relationships to related work, analytical expressions for the normalization constant and the entropy, and the conditional and marginal distributions. Comment: 6 pages, 1 figure.
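    As background (a general maximum-entropy fact, not the paper's exact parametrization): maximizing entropy subject to constraints $\mathbb{E}[f_i(\mathbf{x})] = c_i$ yields an exponential-family density $p(\mathbf{x}) \propto \exp\big(-\sum_i \lambda_i f_i(\mathbf{x})\big)$, so imposing second- and fourth-moment constraints produces an exponent that is a quadratic-plus-quartic form in $\mathbf{x}$. An illustrative instance is $p(\mathbf{x}) \propto \exp\big(-\mathbf{x}^\top A\,\mathbf{x} - b\,(\mathbf{x}^\top A\,\mathbf{x})^2\big)$ with $A$ positive definite and $b > 0$; the paper's specific elliptical parametrization may differ.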

    Learning generative texture models with extended Fields-of-Experts

    We evaluate the ability of the popular Fields-of-Experts (FoE) model to capture structure in images. As a test case we focus on modeling synthetic and natural textures. We find that even for modeling single textures, the FoE provides insufficient flexibility to learn good generative models – it does not perform any better than the much simpler Gaussian FoE. We propose an extended version of the FoE (allowing for bimodal potentials) and demonstrate that this novel formulation, when trained with a better approximation of the likelihood gradient, gives rise to a more powerful generative model of specific visual structure that produces significantly better results for the texture task.
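    A compact sketch of the FoE energy being discussed (standard Roth-and-Black form with Student-t experts; the paper's bimodal potentials would replace `student_t` below, and the filters here are random stand-ins, not learned ones):

```python
import numpy as np
from scipy.signal import convolve2d

# FoE energy: E(x) = sum over filters i and cliques c of alpha_i * rho(J_i^T x_c),
# computed here by convolving each filter over the image.

rng = np.random.default_rng(0)
filters = [rng.standard_normal((3, 3)) for _ in range(8)]   # assumed toy filter bank
alphas = np.ones(8)

def student_t(y):
    """Student-t expert energy: log(1 + y^2 / 2)."""
    return np.log1p(0.5 * y**2)

def foe_energy(image):
    E = 0.0
    for J, a in zip(filters, alphas):
        resp = convolve2d(image, J, mode='valid')   # filter responses on all cliques
        E += a * student_t(resp).sum()
    return E

x = rng.standard_normal((32, 32))
print(foe_energy(x))
```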